AI Summary and Newly Learned
2025-01-05
I was writing a summary of the "Meetup with Audrey & Glen" event in o1 Pro, and found the result more interesting than a plain AI summary.
What is it?
I can only accept knowledge that is one step ahead of what I have now.
That QV exists is something I knew from the beginning.
"Using QV to vote in hackathons is useful for identifying synergies" was new to me.
Quadratic Voting is useful for finding synergies
An AI summary of the whole event drops this distinction.
It would just say something like "the usefulness of QV was also discussed."
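For readers who have not seen Quadratic Voting before, its core rule is that casting n votes for one option costs n² credits, so concentrating all your credits on one option is quadratically expensive, while spreading them reveals the breadth of your interests. A minimal sketch of that rule (the hackathon-style budget and project names are my illustration, not from the source):

```python
# Quadratic Voting: casting n votes on a single option costs n**2 credits.
def vote_cost(votes: int) -> int:
    """Credits required to cast `votes` votes on one option."""
    return votes ** 2

def total_cost(allocation: dict[str, int]) -> int:
    """Total credits spent across all options in an allocation."""
    return sum(vote_cost(v) for v in allocation.values())

# With a 100-credit budget, concentration is expensive:
assert vote_cost(10) == 100   # all 100 credits buy only 10 votes on one project

# Spreading the same budget expresses interest in several projects at once:
spread = {"project_a": 6, "project_b": 6, "project_c": 5}
assert total_cost(spread) == 97   # 36 + 36 + 25
```

Because concentrating votes is quadratically expensive, the resulting vote distribution shows which combinations of projects voters value together, which is presumably the "synergy" signal the remark above refers to.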
"Interesting" is subjective.
When it's inside the boundaries of my knowledge, it's "not interesting because I already know."
If it's outside my knowledge, it's "not interesting because I don't know better."
In areas where I know more than others, a "summary tailored to the majority" tends to fall into the former category, and so is not interesting.
I fail to get a useful summary because I cannot tell the AI where the boundaries of my knowledge lie.
This is a lack of skill on the part of the user.
And yet we blame the AI, blame others.
There are two interpretations of the "pyramid of knowledge" picture.
One reading: the concrete at the bottom, the abstract at the top.
Abstract knowledge is built on top of concrete knowledge.
The other reading: when explaining to others, the things below must be learned first, and the things above are explained afterward.
As a physical metaphor, the latter better expresses the "dependency of understanding,"
but it has merged with the separate story of "concrete examples first, abstract models later" and become conflated with the former.
I believe that only an abstract model properly connected to the concrete can be dug into and lead to concrete action.
Conversely, some people float around with abstract models that are not connected to the concrete (Rootless Knowledge).
When speaking to an unspecified audience in lectures, writing, etc., you don't know what they do and don't know, so you stack your explanation on top of an assumed baseline: "they know at least this much."
For those of us who already know a lot, this bottom part is not interesting.
What Quadratic Voting is was explained at the event, but my lecture notes don't include an explanation of Quadratic Voting itself, because I already know it.
But "Quadratic Voting is useful for finding synergies" is something I heard on the day, thought "this is an interesting story I've never heard before," and wrote down in my notes.
Taking notes from on-site participation differs from the usual summary preparation from video transcriptions.
Data on "what I find interesting" is being recorded.
Tackling East Asia’s Population Decline Issues with Local Coop’s Subsystem for Local Governance
The first half was compiled from the transcription without any instructions.
The latter half is my viewpoint (= clipping points) based on my notes.
2025-01-06
There are [people who go round and round in circles talking in the abstract].
They don't recognize it as going in circles.
Apart from the concrete-abstract axis, there is a dependency on prerequisite knowledge for understanding.
AI summaries tend toward the abstract, detached from prerequisite knowledge.
This has something in common with the ungrounded abstract talkers.
That's why they are not interesting.
These people don't arrive at abstract stories by abstracting from their own experience; they skim books full of abstractions, accumulate whatever fragments they happen to understand (or memorized whole because the phrasing was cool) without the prerequisites, and spit them back out in that form.
It is like taking only the top of the pyramid without the base.
Because they don't understand it, they can only quote it.
o1 Pro.icon The discussion so far shows that "going back and forth between the concrete and the abstract" and "sharing prerequisite knowledge" are essential for deepening knowledge, and that AI summaries becoming abstract and humans' abstract talk becoming groundless share the same weakness: a weak "connection to concrete examples grounded in prerequisite knowledge."
As a new perspective, I propose explicitly addressing "the boundaries of prerequisite knowledge." This makes it easier to judge the balance of "how many concrete examples to include" and "where to start abstracting."
If you break the stereotype of "summary = abstraction" and use a "hybrid summary" that retains concrete examples, you can achieve both deeper understanding and practicality.
By using AI as an "interactive sidekick that supplements one's prerequisite knowledge," the abstract model can be refined while strengthening the foundation of the knowledge pyramid.
These ideas should help you move beyond the passive attitude of "just reading AI summaries" or "just listening to abstract discussions" and become an "active learner" who moves back and forth between the concrete and the abstract while constantly monitoring your own level of understanding and knowledge.
---
This page is auto-translated from /nishio/AI要約と新しく知ったこと using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.